The High-Luminosity Large Hadron Collider (HL-LHC) will generate data at rates that defy classical computation. At 40 million bunch crossings per second, each event leaves on the order of 100,000 discrete detector hits that must be reconnected into accurate particle trajectories.
This is a combinatorial nightmare: candidate hit pairs alone grow as O(N^2), and full track hypotheses explode combinatorially beyond that. We need an algorithm that is not just accurate, but fast enough for hardware triggers.
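To make the scale concrete, here is a back-of-envelope count of candidate segments (a toy calculation; the 10^5 hit count comes from the figure above):

```python
# Back-of-envelope combinatorics for one event.
n_hits = 100_000
pairs = n_hits * (n_hits - 1) // 2                # unordered hit pairs, O(N^2)
print(f"{pairs:.1e} candidate two-hit segments")  # ~5.0e9 before any cuts
```

Even before considering three-hit seeds, the raw search space is billions of combinations per event.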
The initial phase leveraged a Graph Neural Network (GNN) to address the combinatorial complexity of the TrackML challenge. Each detector hit was represented as a node, and potential connections (edges) were constructed based on geometric and physical constraints. The GNN was trained on labeled data to classify edges as true or false, using features such as spatial coordinates, layer information, and local hit density. The model architecture included multiple message-passing layers and edge classification heads, enabling the network to learn complex spatial relationships and suppress noise.
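A minimal sketch of that edge construction, assuming each hit carries cylindrical coordinates (phi, z) and a layer index; the cut values are illustrative, not the tuned ones:

```python
import numpy as np

def build_candidate_edges(phi, z, layer, dphi_max=0.05, dz_max=20.0):
    """Link hits on consecutive layers whose azimuthal and longitudinal
    separations pass loose geometric cuts (a crude stand-in for the
    real selection)."""
    edges = []
    n = len(phi)
    for i in range(n):
        for j in range(n):
            if layer[j] != layer[i] + 1:
                continue  # only connect adjacent layers, inside-out
            dphi = abs(np.angle(np.exp(1j * (phi[j] - phi[i]))))  # wrap to [-pi, pi]
            if dphi < dphi_max and abs(z[j] - z[i]) < dz_max:
                edges.append((i, j))
    return np.asarray(edges, dtype=np.int64).T  # shape (2, E), PyG edge_index layout
```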
Technical Details: The GNN was implemented using PyTorch Geometric, with a custom loss function to balance precision and recall. Training involved data augmentation and hard negative mining to improve generalization. The output was a pruned graph where only high-confidence edges remained, dramatically reducing the search space for subsequent tracking algorithms.
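The custom loss itself isn't reproduced here, so the sketch below substitutes a class-weighted binary cross-entropy for the precision/recall balancing; the two-layer GCN and the 0.9 pruning threshold are likewise illustrative:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class EdgeClassifier(torch.nn.Module):
    """Message passing over hits, then an MLP scores each candidate edge."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.edge_mlp = torch.nn.Sequential(
            torch.nn.Linear(2 * hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1))

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        src, dst = edge_index
        return self.edge_mlp(torch.cat([h[src], h[dst]], dim=-1)).squeeze(-1)

# Weighted BCE as a stand-in for the custom precision/recall loss:
#   logits = model(x, edge_index)
#   loss = F.binary_cross_entropy_with_logits(
#       logits, labels.float(), pos_weight=torch.tensor(5.0))
# Pruning: keep only high-confidence edges for the next stage.
#   keep = torch.sigmoid(logits) > 0.9
#   pruned_edge_index = edge_index[:, keep]
```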
Result: The GNN approach achieved a substantial reduction in combinatorial candidates and improved initial track seeding, but struggled with ambiguous topologies and overlapping tracks, motivating the need for hybridization.
In the second phase, a hybrid pipeline was developed. The GNN output served as a filter, and a classical track-building algorithm (inspired by combinatorial Kalman filtering) was applied to the reduced graph. This stage used physics-based constraints—such as momentum conservation, curvature in the magnetic field, and hit continuity—to assemble valid tracks from the GNN-pruned candidates.
Technical Details: The classical component used a breadth-first search to grow tracks, scoring candidates with a custom cost function incorporating both GNN edge confidence and physical plausibility. Tracks were iteratively extended and validated, with ambiguous hits resolved using a global assignment strategy. This approach balanced the speed of deep learning with the interpretability and robustness of physics-based logic.
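A sketch of that growth loop, assuming the pruned graph arrives as an adjacency dict with per-edge GNN confidences, and a caller-supplied `physics_cost` (curvature/continuity penalty); the global ambiguity-resolution step is omitted here:

```python
from collections import deque

def grow_tracks(adj, edge_conf, physics_cost, seeds):
    """Breadth-first extension of seeds into track candidates, scored by
    GNN edge confidence minus a physics-based penalty."""
    tracks = []
    for seed in seeds:
        queue = deque([([seed], 0.0)])
        best_path, best_score = [seed], 0.0
        while queue:
            path, score = queue.popleft()
            if score > best_score:
                best_path, best_score = path, score
            for nxt in adj.get(path[-1], ()):
                if nxt in path:
                    continue  # never revisit a hit within one candidate
                step = edge_conf[(path[-1], nxt)] - physics_cost(path, nxt)
                if step > 0:  # extend only if the new hit helps on balance
                    queue.append((path + [nxt], score + step))
        tracks.append(best_path)
    return tracks
```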
Result: The hybrid approach achieved high accuracy and efficiency, outperforming either the GNN or the classical method alone. However, it still required iterative refinement and remained sensitive to the GNN's edge predictions in dense regions.
The third phase focused on a purely physics-driven solution, leveraging established algorithms from high-energy physics. Techniques such as the Hough transform and combinatorial Kalman filter were used to identify track candidates directly from the raw hit data, without deep learning pre-filtering. These methods exploited the helical trajectories of charged particles in a magnetic field, using parameter space voting and recursive state estimation to reconstruct tracks.
Technical Details: The Hough transform mapped hits into a parameter space (e.g., curvature, angle), where tracks appeared as peaks. The Kalman filter iteratively updated track parameters as new hits were added, accounting for measurement uncertainty and multiple scattering. Post-processing steps included duplicate removal and outlier rejection based on chi-squared statistics.
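A toy version of the transverse-plane voting step, using the small-sagitta approximation phi(r) ~ phi0 - kappa*r for tracks from the origin; the bin counts and curvature range are illustrative:

```python
import numpy as np

def hough_vote(r, phi, n_phi=256, n_kappa=128, kappa_max=5e-3):
    """Accumulate votes in (phi0, kappa) space; each genuine track shows
    up as a peak where many hits agree on one parameter pair."""
    acc = np.zeros((n_phi, n_kappa), dtype=np.int32)
    kappas = np.linspace(-kappa_max, kappa_max, n_kappa)
    for ri, pi in zip(r, phi):
        phi0 = (pi + kappas * ri) % (2 * np.pi)    # invert phi = phi0 - kappa * r
        bins = (phi0 / (2 * np.pi) * n_phi).astype(int) % n_phi
        acc[bins, np.arange(n_kappa)] += 1         # one vote per curvature bin
    return acc

# Strongest candidate:
#   i_phi, i_kappa = np.unravel_index(acc.argmax(), acc.shape)
```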
Result: The classical physics-based approach provided robust, interpretable results and served as a strong baseline. However, it was computationally intensive and less scalable for the extreme event rates of the HL-LHC, highlighting the need for quantum and hybrid solutions.
We realized that to beat the speed limit, we had to stop searching for tracks one by one and start solving the event as a whole. We moved from iterative logic to quantum physics.
We use a Geometric Deep Learning pipeline (GNN) to construct a "Graph of Possibilities." It doesn't solve the problem; it strips away the obvious noise, leaving us with a clean scaffolding of potential connections.
This is the core innovation. We translate the laws of physics into a mathematical equation called a Hamiltonian (H): an energy function over the whole event whose lowest-energy configuration corresponds to the best global assignment of hits to tracks.
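Concretely, one common encoding (in the spirit of Denby-Peterson track Hamiltonians, which our formulation follows in outline; the exact terms and weights are not reproduced here) assigns a binary variable to each candidate segment and writes H as a QUBO. `angle_reward` below is a hypothetical smoothness score:

```python
def build_qubo(segments, angle_reward, conflict_penalty=2.0):
    """Toy QUBO: H(s) = sum_i b_i*s_i + sum_{i<j} a_ij*s_i*s_j, s_i in {0,1}.
    Smooth continuations get negative (rewarding) couplings; segments
    that claim the same hit get positive (penalizing) ones."""
    Q = {}
    for i, (a1, b1) in enumerate(segments):
        Q[(i, i)] = -1.0  # linear bias: mild reward for keeping a segment
        for j in range(i + 1, len(segments)):
            a2, b2 = segments[j]
            if a1 == a2 or b1 == b2:
                Q[(i, j)] = conflict_penalty  # bifurcation at a shared hit
            elif b1 == a2 or b2 == a1:
                Q[(i, j)] = -angle_reward(segments[i], segments[j])  # continuation
    return Q  # pair-keyed dict, the standard annealer input format
```

Minimizing H over all binary assignments at once is exactly the global, whole-event formulation described above.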
Classical algorithms (like the Kaggle solution) get stuck in local minima: they find a good path and stop looking.
Quantum annealing plays by different rules. By exploiting quantum tunneling, the solver can pass through energy barriers instead of climbing over them, sampling low-energy configurations of the whole event at once. Rather than iterating through candidate paths one by one, it relaxes toward a near-optimal solution of the entire Hamiltonian in a single anneal.
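An end-to-end sketch of solving that QUBO, using dwave-neal's simulated annealer as a local stand-in for quantum hardware (a production run would swap in a D-Wave quantum sampler; `build_qubo` and `angle_reward` are the hypothetical helpers from the previous sketch):

```python
import neal  # pip install dwave-neal

segments = [(0, 1), (1, 2), (0, 3)]        # toy segments as (hit_a, hit_b)
angle_reward = lambda s1, s2: 0.8          # constant stand-in for a smoothness score
Q = build_qubo(segments, angle_reward)

sampler = neal.SimulatedAnnealingSampler()
sampleset = sampler.sample_qubo(Q, num_reads=100)
best = sampleset.first.sample              # lowest-energy assignment found
print([segments[i] for i, v in best.items() if v == 1])
# -> [(0, 1), (1, 2)]: the smooth chain is kept, the bifurcating segment dropped
```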